Approximate message passing (AMP) methods and their variants have attracted considerable recent attention for the problem of estimating a random vector $\mathbf{x}$ observed through a linear transform $\mathbf{A}$. In the case of a large i.i.d. zero-mean Gaussian $\mathbf{A}$, the methods exhibit fast convergence with precise analytic characterizations of the algorithm's behavior. However, the convergence of AMP under general transforms $\mathbf{A}$ is not fully understood. In this paper, we provide sufficient conditions for the convergence of a damped version of the generalized AMP (GAMP) algorithm in the case of quadratic cost functions (i.e., Gaussian likelihood and prior). It is shown that, with sufficient damping, the algorithm is guaranteed to converge, although the amount of damping grows with the peak-to-average ratio of the squared singular values of the transform $\mathbf{A}$. This result explains the good performance of AMP on i.i.d. Gaussian transforms $\mathbf{A}$, as well as its difficulties with ill-conditioned or non-zero-mean transforms $\mathbf{A}$. A related sufficient condition is then derived for the local stability of the damped GAMP method under general cost functions, assuming certain strict convexity conditions.
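The peak-to-average ratio of the squared singular values, which governs the required damping, is easy to compute numerically. The sketch below (an illustration, not part of the paper) contrasts an i.i.d. zero-mean Gaussian transform, where the ratio stays moderate, with a non-zero-mean transform, where a dominant rank-one component inflates the largest singular value and hence the ratio; the matrix sizes and offset are arbitrary choices for the demonstration.

```python
import numpy as np

def peak_to_average_ratio(A):
    """Peak-to-average ratio of the squared singular values of A."""
    s2 = np.linalg.svd(A, compute_uv=False) ** 2
    return s2.max() / s2.mean()

rng = np.random.default_rng(0)
m, n = 500, 250  # illustrative dimensions

# i.i.d. zero-mean Gaussian transform: squared singular values follow the
# Marchenko-Pastur law, so the ratio remains bounded as the dimensions grow.
A_iid = rng.standard_normal((m, n)) / np.sqrt(m)
print(peak_to_average_ratio(A_iid))

# Non-zero-mean transform: adding a constant offset contributes a rank-one
# component with singular value ~ sqrt(m*n), making the ratio very large --
# the regime where AMP has difficulties and heavy damping is needed.
A_mean = A_iid + 1.0
print(peak_to_average_ratio(A_mean))
```

Running this shows the ratio for the non-zero-mean matrix is orders of magnitude larger than for the zero-mean one, consistent with the sufficient condition described above.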